In this paper we develop FaceQgen, a no-reference quality assessment approach for face images based on a Generative Adversarial Network that generates a scalar quality measure related to face recognition accuracy. FaceQgen does not require labelled quality measures for training: it is trained from scratch using the SCface database. FaceQgen applies image restoration to a face image of unknown quality, transforming it into a canonical high-quality image, i.e., frontal pose, homogeneous background, etc. The quality estimate is the similarity between the original and the restored images, since low-quality images experience larger changes under restoration. We compare three different numerical quality measures: a) the MSE between the original and the restored images, b) their SSIM, and c) the output score of the GAN's discriminator. The results show that FaceQgen's quality measures are good estimators of face recognition accuracy. Our experiments include a comparison with other quality assessment methods designed for faces and for general images, in order to position FaceQgen within the state of the art. This comparison shows that, even though FaceQgen does not surpass the best existing face quality assessment methods in terms of face recognition accuracy prediction, it achieves results good enough to demonstrate the potential of this semi-supervised learning approach to quality estimation (in particular, data-driven learning based on a single high-quality image per subject), with room to improve its performance in the future through adequate refinements of the model, and with a significant advantage over competing methods: it does not need quality labels for its development. This makes FaceQgen flexible and scalable without expensive data curation.
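As a rough illustration of the restoration-similarity idea above, the sketch below scores a face image by comparing it with its restored counterpart using MSE and SSIM; the `restore_face` callable stands in for the trained GAN generator and is purely hypothetical, as is the assumption of grayscale images normalized to [0, 1].

```python
# Minimal sketch of FaceQgen-style quality scoring: the quality of a face image is
# measured by the similarity between the input and its restored version.
# `restore_face` stands in for the trained GAN generator (hypothetical here).
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

def quality_scores(image: np.ndarray, restore_face) -> dict:
    """Return two of the candidate quality measures for a grayscale face image in [0, 1]."""
    restored = restore_face(image)              # canonical high-quality reconstruction
    mse = mean_squared_error(image, restored)   # larger change under restoration -> lower quality
    ssim = structural_similarity(image, restored, data_range=1.0)
    return {"mse": mse, "ssim": ssim}           # the third measure would be the GAN discriminator score
```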
The main scope of this chapter is to serve as an introduction to face presentation attack detection, including key resources and advances in the field over the last few years. The following pages present the different presentation attacks that a face recognition system can confront, in which an attacker presents to the sensor, mainly a camera, a Presentation Attack Instrument (PAI), generally a photograph, a video, or a mask, in an attempt to impersonate a genuine user. First, we introduce the current status of face recognition, its level of deployment, and its challenges. In addition, we present the vulnerabilities and possible attacks that a face recognition system may be exposed to, showing the high importance of presentation attack detection methods. We review the different types of presentation attack methods, from simpler to more complex ones, and the cases in which they could be effective. We then summarize the most popular presentation attack detection methods for dealing with these attacks. Finally, we introduce the public datasets used by the research community to explore the vulnerability of face biometrics to presentation attacks and to develop effective countermeasures against known PAIs.
In this paper we develop FaceQvec, a software component for estimating the conformity of facial images with each of the points contemplated in ISO/IEC 19794-5, a quality standard that defines general quality guidelines determining whether face images are acceptable or unacceptable for use in official documents such as passports or ID cards. Such a quality assessment tool can help to improve face recognition accuracy and to identify which factors affect the quality of a given face image, so that actions can be taken to eliminate or reduce those factors, e.g., with post-processing techniques or by re-acquiring the image. FaceQvec consists of the automation of 25 individual tests related to the different points contemplated in the aforementioned standard, as well as to other characteristics of the images considered to be related to facial quality. We first report the results of the quality tests evaluated on a development dataset captured under realistic conditions. We used those results to adjust the decision threshold of each test. We then checked those thresholds again on an evaluation database containing new face images not seen during development. The evaluation results demonstrate the accuracy of the individual tests for checking compliance with ISO/IEC 19794-5. FaceQvec is available online (https://github.com/uam-biometrics/faceqvec).
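The following minimal sketch (not the released FaceQvec code) illustrates how a decision threshold for one of the individual tests could be tuned on a development set: sweep candidate thresholds over the raw scalar output of a test and keep the one that best separates compliant from non-compliant images.

```python
# Illustrative per-test threshold tuning on a development set: the test outputs a raw
# scalar score (e.g. a blur or inter-eye-distance measure) and we pick the cut-off
# with the best accuracy against the compliant/non-compliant ground truth.
import numpy as np

def tune_threshold(scores: np.ndarray, compliant: np.ndarray) -> float:
    """scores: raw outputs of one quality test; compliant: boolean ground-truth labels."""
    candidates = np.unique(scores)
    accuracies = [np.mean((scores >= t) == compliant) for t in candidates]
    return float(candidates[int(np.argmax(accuracies))])

# At evaluation time each of the 25 tests simply compares its raw score to its tuned threshold.
```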
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.
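As a hedged illustration of what an "election model" over such a dataset might look like, the snippet below aggregates tweet-level political leanings into national or state-averaged preference shares; the column names and the binary ruling/opposition labeling are illustrative assumptions, not the authors' actual schema.

```python
# Hedged sketch of one possible election model: classify each tweet as leaning towards the
# ruling coalition or the opposition, then aggregate preference shares, optionally using the
# geographic attributes attached to the tweets. Column names are illustrative only.
import pandas as pd

def preference_shares(tweets: pd.DataFrame, by_region: bool = True) -> pd.Series:
    """tweets needs a 'leaning' column in {'ruling', 'opposition'} and a 'state' column."""
    if by_region:
        per_state = tweets.groupby("state")["leaning"].value_counts(normalize=True)
        return per_state.unstack(fill_value=0.0).mean()     # average of state-level shares
    return tweets["leaning"].value_counts(normalize=True)    # raw national share
```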
Content moderation is the process of screening and monitoring user-generated content online. It plays a crucial role in stopping content resulting from unacceptable behaviors such as hate speech, harassment, violence against specific groups, terrorism, racism, xenophobia, homophobia, or misogyny, to mention a few, on online social platforms. These platforms make use of a plethora of tools to detect and manage malicious information; however, malicious actors also improve their skills, developing strategies to surpass these barriers and continue to spread misleading information. Twisting and camouflaging keywords are among the most frequently used techniques to evade platform content moderation systems. In response to this ongoing issue, this paper presents an innovative approach to address this linguistic trend in social networks through the simulation of different content evasion techniques and a multilingual Transformer model for content evasion detection. In this way, we share with the rest of the scientific community a multilingual public tool, named "pyleetspeak", to generate/simulate the phenomenon of content evasion in a customizable way through automatic word camouflage, and a multilingual Named-Entity Recognition (NER) Transformer-based model tuned for its recognition and detection. The multilingual NER model is evaluated in different textual scenarios, detecting different types and mixtures of camouflage techniques and achieving an overall weighted F1 score of 0.8795. This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content evasion on social networks, making the fight against information disorders more effective.
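To make the notion of word camouflage concrete, here is a toy, self-contained example of leetspeak-style substitution of the kind pyleetspeak is reported to simulate; this is not the library's API, only an illustration of the evasion phenomenon itself.

```python
# Toy illustration of word camouflage: replace characters with visually similar symbols
# so that keyword-based moderation filters no longer match the original term.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$"}

def camouflage(word: str) -> str:
    """Leetspeak substitution, e.g. 'violence' -> 'v10l3nc3'."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in word)

print(camouflage("misinformation"))  # m1$1nf0rm4t10n
```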
In this paper, we present an evolved version of the Situational Graphs, which jointly models in a single optimizable factor graph: a SLAM graph, as a set of robot keyframes containing their associated measurements and robot poses; and a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between those elements. Our proposed S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging the high-level information of the environment. To extract such high-level information, we present novel room and floor segmentation algorithms that utilize the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulations of distinct indoor environments, real datasets captured over several construction sites and office environments, and a real public dataset of indoor office environments. S-Graphs+ outperforms relevant baselines on the majority of the datasets while extending the robot's situational awareness with a four-layered scene model. Moreover, we make the algorithm available as a Docker file.
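A schematic of the four-layered scene model, expressed as plain data structures, may help fix the hierarchy in mind; this is an illustrative sketch, not the authors' implementation, and in practice each node would become a vertex in the optimizable factor graph.

```python
# Schematic of the four-layered scene model described above (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Keyframe:          # layer 1: robot pose estimates
    pose: List[float]    # e.g. SE(3) as [x, y, z, qx, qy, qz, qw]

@dataclass
class Wall:              # layer 2: planar wall surfaces
    plane: List[float]   # plane coefficients [nx, ny, nz, d]

@dataclass
class Room:              # layer 3: a set of wall planes
    walls: List[Wall] = field(default_factory=list)

@dataclass
class Floor:             # layer 4: rooms on a given floor level
    rooms: List[Room] = field(default_factory=list)
```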
There has been significant work recently in developing machine learning models in high energy physics (HEP) for tasks such as classification, simulation, and anomaly detection. Typically, these models are adapted from those designed for datasets in computer vision or natural language processing without necessarily incorporating inductive biases suited to HEP data, such as respecting its inherent symmetries. Such inductive biases can make the model more performant and interpretable, and reduce the amount of training data needed. To that end, we develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\mathrm{SO}^+(3,1)$, with a latent space living in the representations of the group. We present our architecture and several experimental results on jets at the LHC and find that it significantly outperforms a non-Lorentz-equivariant graph neural network baseline on compression, reconstruction, and anomaly detection. We also demonstrate the advantage of such an equivariant model in analyzing the latent space of the autoencoder, which can have a significant impact on the explainability of anomalies found by such black-box machine learning models.
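The symmetry the LGAE is built around can be illustrated numerically: Lorentz transformations preserve the Minkowski inner product of particle four-momenta, which is what an $\mathrm{SO}^+(3,1)$-equivariant architecture must respect. The boost below is a generic example and is not part of the LGAE architecture itself.

```python
# Numerical illustration of Lorentz invariance: a boost along z leaves the Minkowski
# norm of a four-momentum unchanged.
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric with signature (+,-,-,-)

def minkowski(p: np.ndarray, q: np.ndarray) -> float:
    return float(p @ ETA @ q)

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
boost_z = np.array([[gamma, 0, 0, gamma * beta],
                    [0,     1, 0, 0],
                    [0,     0, 1, 0],
                    [gamma * beta, 0, 0, gamma]])

p = np.array([10.0, 1.0, 2.0, 3.0])             # (E, px, py, pz) of a jet constituent
assert np.isclose(minkowski(p, p), minkowski(boost_z @ p, boost_z @ p))
```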
System identification, also known as learning forward models, transfer functions, system dynamics, etc., has a long tradition both in science and engineering across different fields. In particular, it is a recurring theme in Reinforcement Learning research, where forward models approximate the state transition function of a Markov Decision Process by learning a mapping from the current state and action to the next state. This problem is commonly formulated directly as a Supervised Learning problem. This common approach faces several difficulties due to the inherent complexities of the dynamics to learn, for example, delayed effects, high non-linearity, non-stationarity, partial observability and, more importantly, error accumulation when using bootstrapped predictions (predictions based on past predictions) over large time horizons. Here we explore the use of Reinforcement Learning for this problem. We elaborate on why and how this problem fits naturally and soundly as a Reinforcement Learning problem, and present some experimental results that demonstrate that RL is a promising technique to solve this kind of problem.
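For concreteness, the sketch below shows the standard supervised formulation being discussed: a forward model fitted by regression on (state, action, next state) tuples, and a bootstrapped multi-step rollout where each prediction feeds the next one, which is where errors accumulate over long horizons. The regressor choice is an arbitrary placeholder.

```python
# Supervised forward-model learning f(s, a) -> s' and a bootstrapped rollout where
# errors compound, illustrating the difficulty discussed above.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_forward_model(states, actions, next_states):
    X = np.hstack([states, actions])
    return MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, next_states)

def rollout(model, s0, action_sequence):
    """Multi-step prediction using past predictions (bootstrapping)."""
    s, trajectory = np.asarray(s0, dtype=float), []
    for a in action_sequence:
        s = model.predict(np.hstack([s, a]).reshape(1, -1))[0]
        trajectory.append(s)
    return np.array(trajectory)
```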
The estimation of the generalization error of classifiers often relies on a validation set. Such a set is hardly available in few-shot learning scenarios, a highly disregarded shortcoming in the field. In these scenarios, it is common to rely on features extracted from pre-trained neural networks combined with distance-based classifiers such as nearest class mean. In this work, we introduce a Gaussian model of the feature distribution. By estimating the parameters of this model, we are able to predict the generalization error on new classification tasks with few samples. We observe that accurate distance estimates between class-conditional densities are the key to accurate estimates of the generalization performance. Therefore, we propose an unbiased estimator for these distances and integrate it in our numerical analysis. We show that our approach outperforms alternatives such as the leave-one-out cross-validation strategy in few-shot settings.
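To illustrate the kind of bias that appears when distances between class-conditional densities are estimated from few samples, the sketch below applies a textbook correction to the naive squared distance between sample means; the paper's exact estimator may differ, so treat this only as an example of the idea.

```python
# With few samples, the squared distance between sample means overestimates the true
# squared distance between class means by roughly tr(S1)/n1 + tr(S2)/n2, so we subtract
# that term. This is a standard correction used here purely for illustration.
import numpy as np

def debiased_squared_distance(x1: np.ndarray, x2: np.ndarray) -> float:
    """x1, x2: (n_i, d) arrays of features from two classes."""
    n1, n2 = len(x1), len(x2)
    naive = float(np.sum((x1.mean(0) - x2.mean(0)) ** 2))
    correction = np.trace(np.cov(x1.T)) / n1 + np.trace(np.cov(x2.T)) / n2
    return naive - correction
```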
One of the main limitations of the commonly used Absolute Trajectory Error (ATE) is that it is highly sensitive to outliers. As a result, in the presence of just a few outliers, it often fails to reflect the varying accuracy as the inlier trajectory error or the number of outliers varies. In this work, we propose an alternative error metric for evaluating the accuracy of the reconstructed camera trajectory. Our metric, named Discernible Trajectory Error (DTE), is computed in four steps: (1) Shift the ground-truth and estimated trajectories such that both of their geometric medians are located at the origin. (2) Rotate the estimated trajectory such that it minimizes the sum of geodesic distances between the corresponding camera orientations. (3) Scale the estimated trajectory such that the median distance of the cameras to their geometric median is the same as that of the ground truth. (4) Compute the distances between the corresponding cameras, and obtain the DTE by taking the average of the mean and root-mean-square (RMS) distance. This metric is an attractive alternative to the ATE, in that it is capable of discerning the varying trajectory accuracy as the inlier trajectory error or the number of outliers varies. Using the similar idea, we also propose a novel rotation error metric, named Discernible Rotation Error (DRE), which has similar advantages to the DTE. Furthermore, we propose a simple yet effective method for calibrating the camera-to-marker rotation, which is needed for the computation of our metrics. Our methods are verified through extensive simulations.
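A minimal sketch of the translation-related steps (1, 3 and 4) of the DTE is given below, assuming (n, 3) arrays of corresponding camera positions; step 2, the rotation alignment minimizing the sum of geodesic distances between orientations, is a rotation-averaging problem and is only marked as a stub.

```python
# Hedged sketch of the translation-related part of the DTE computation.
import numpy as np

def geometric_median(points: np.ndarray, iters: int = 100) -> np.ndarray:
    """Weiszfeld iterations for the geometric median of an (n, 3) point set."""
    m = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - m, axis=1)
        w = 1.0 / np.clip(d, 1e-12, None)
        m = (points * w[:, None]).sum(axis=0) / w.sum()
    return m

def dte_translation_part(gt: np.ndarray, est: np.ndarray) -> float:
    gt = gt - geometric_median(gt)                        # step 1: shift both trajectories
    est = est - geometric_median(est)
    # step 2 (omitted): rotate `est` to minimize the sum of geodesic rotation distances
    scale = np.median(np.linalg.norm(gt, axis=1)) / np.median(np.linalg.norm(est, axis=1))
    est = est * scale                                     # step 3: match median spread
    d = np.linalg.norm(gt - est, axis=1)                  # step 4: per-camera distances
    return 0.5 * (d.mean() + np.sqrt(np.mean(d ** 2)))    # average of mean and RMS
```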